Detecting and responding to moralised misinformation


Matthew Andreotta1 Cécile Paris2 Ingrid van Putten1 Mark Hurlstone3 Iain Walker4 Fabio Boschetti1

1Environment, CSIRO
2Data61, CSIRO
3Department of Psychology, Lancaster University
4Melbourne School of Psychological Sciences, University of Melbourne


26 September 2023



 Email: matthew.andreotta@csiro.au

 Blog: matt-lab.github.io

 LinkedIn: @matthew-andreotta



 Link to slides: matt-lab.github.io/workshop_moral-misinformation

Outline


  1. The concept of moralised mis/disinformation.
  2. A tool for detecting moral features of language.
  3. An application to environmental debates on Twitter/X.
  4. Other potential applications.

Disinformation narratives about the Russo-Ukrainian War

  • Examples of pro-Russian disinformation (NewsGuard, 2023):
    • Russian-speaking residents in Donbas were subjected to genocide.
    • Nazi ideology is driving Ukraine’s political leadership.
    • The U.S. has a network of bioweapons labs in Eastern Europe.
  • Each narrative uses moral convictions to frame Ukraine and its allies negatively.
  • Moral convictions are ideas or feelings of what is fundamentally right or wrong, good or evil, just or unjust (Malle, 2021; Skitka, Hanson, Morgan, & Wisneski, 2021).
  • Moralised messages are messages that embed attitudes, objects, activity, or rhetoric in moral convictions (Malle, 2021; Rozin, 1999; Skitka et al., 2021).

Moralised messages

  • Disinformation and misinformation can be particularly moralised.
  • Moralised language is more prominent in articles from non-reputable sources (Carrasco-Farré, 2022).
  • However, moralised messages are not necessarily less true than neutral messages.
  • E.g., the North Atlantic Treaty Organization (NATO) and Vladimir Putin both leveraged moralised language in their public addresses in late February 2022 (Demasi, 2022).

The consequences of moralised messages

Moralised messages can grant their audience three licenses:

  1. License to be uncompromising
  2. License to be hostile to those with different views
  3. License to share falsehoods

Detecting moralised messages in text

What does pmoral tell us about a message?

Quantify two characteristics:

  1. Moral recognition - whether a message is at all related to moral convictions.
    • For any word in a message, is pmoral larger than pneutral?
    • Changes can reflect a shift from mere preference or narrow opinion to moral conviction (Skitka et al., 2021).
  2. Moral relevance - degree to which message is related to moral convictions.
    • For moral words, how large is pmoral?
    • Changes can reflect a shift in intensity of moral conviction (Skitka et al., 2021).
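The two measures above can be sketched in code. This is a minimal illustration, assuming each word of a message already carries pmoral and pneutral scores from a moral sentiment inference model; the function names and example scores are hypothetical.

```python
# A minimal sketch of moral recognition and moral relevance, assuming
# each word already has (p_moral, p_neutral) scores from a moral
# sentiment inference model. Names and example scores are hypothetical.

def moral_recognition(word_scores):
    """Recognition: is the message moral at all?
    True if any word's p_moral exceeds its p_neutral."""
    return any(p_moral > p_neutral for p_moral, p_neutral in word_scores)

def moral_relevance(word_scores):
    """Relevance: how strongly moral is the message?
    Here, the mean p_moral over moral words (0.0 if there are none)."""
    moral = [p_moral for p_moral, p_neutral in word_scores if p_moral > p_neutral]
    return sum(moral) / len(moral) if moral else 0.0

# Made-up (p_moral, p_neutral) scores for a three-word message
scores = [(0.62, 0.38), (0.41, 0.59), (0.55, 0.45)]
print(moral_recognition(scores))          # True
print(round(moral_relevance(scores), 3))  # 0.585
```

Averaging over moral words is only one aggregation choice; taking the maximum, or weighting by word count, would emphasise different aspects of moral relevance.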

An application: moralisation of the Great Barrier Reef by climate change contrarians

Research Question

Do climate change contrarians post more moralised messages about the Great Barrier Reef than other users who post about climate change?

  • Used tweet archive from CSIRO’s Emergency Situation Awareness platform (CSIRO, 2023; Power, Robinson, & Cameron, 2023; Power, Robinson, Colton, & Cameron, 2014).
  • Retrieved English tweets posted by Australian users that mentioned the Great Barrier Reef.
  • Examined two groups of users based on text in tweets:
    1. Contrarians. 564 users who posted 7,797 tweets (e.g., “#ClimateCult”, “alarmists”)
    2. Baseline. 16,862 users who posted 187,081 tweets (e.g., “#ClimateChange”, “#ClimateAction”)
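The keyword-based grouping above can be sketched as follows. The marker sets are only the illustrative examples quoted on the slide; the actual study may have used a richer text-classification approach.

```python
# A minimal sketch of keyword-based user grouping, using only the
# illustrative markers quoted on the slide.

CONTRARIAN_MARKERS = {"#climatecult", "alarmists"}       # illustrative
BASELINE_MARKERS = {"#climatechange", "#climateaction"}  # illustrative

def classify_user(tweets):
    """Assign a user to a group from the text of their tweets."""
    text = " ".join(tweets).lower()
    if any(marker in text for marker in CONTRARIAN_MARKERS):
        return "contrarian"
    if any(marker in text for marker in BASELINE_MARKERS):
        return "baseline"
    return "unclassified"

print(classify_user(["The #ClimateCult strikes again"]))  # contrarian
print(classify_user(["Act now! #ClimateAction"]))         # baseline
```

Note that a user matching both marker sets is labelled contrarian here; how to handle such overlaps is a design decision the sketch leaves open.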

An application: moralisation of the Great Barrier Reef by climate change contrarians

  • Generally, content was moralised and negative (e.g., “immoral”, “vile”, “selfishness”).
  • Moral recognition. The proportion of moral tweets posted by contrarians (70.68%) was greater than the proportion of moral tweets posted by the baseline group (65.05%), p < .001.
  • Moral relevance. The intensity of moral tweets was greater for contrarians (pmoral = 0.5186) than for the baseline group (pmoral = 0.5182), p = .018.
  • Contrarians uploaded more moral content about the Great Barrier Reef than other users.
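The recognition comparison above is a difference between two proportions, which a standard two-proportion z-test can check. In this standard-library-only sketch, the moral-tweet counts are reconstructed from the slide's percentages, so the resulting p-value is approximate rather than the study's exact figure.

```python
# A minimal sketch of the two-proportion comparison reported above.
# Moral-tweet counts are reconstructed from the slide's percentages
# (70.68% of 7,797 and 65.05% of 187,081), so results are approximate.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

x1 = round(0.7068 * 7797)    # moral contrarian tweets (reconstructed)
x2 = round(0.6505 * 187081)  # moral baseline tweets (reconstructed)
z, p = two_proportion_z(x1, 7797, x2, 187081)
print(f"z = {z:.2f}, p = {p:.3g}")  # z is large; p is well below .001
```

With these counts the z statistic is around 10, consistent with the reported p < .001.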

Other applications

  • Detecting moralised messages of the Great Barrier Reef:
    • Moralised messages at the level of an entity (‘thing’).
    • Moralised messages at the level of users.
  • Tracking moralised messages over time, of a particular entity or emerging event.
  • Can be used to indicate polarity (vices versus virtues) and domain of moral concern (e.g., fairness, harm).
  • Can inform communication strategies (Feinberg & Willer, 2019; Kodapanakkal et al., 2022):
    • Moral framing. Messages that appeal to the perceived moral basis of an issue.
    • Non-moral framing. Neutral messages to de-escalate conflict.
  • Approach to detecting moralised messages in any text (e.g., news articles, speeches).
  • Can be useful across different topics.
  • Can be useful when the truth is not yet known or still emerging.

Summary

Misinformation may be more likely to spread, and more consequential, when messages are moralised.

  • Disinformation and misinformation are often embedded in broader moral convictions of what is right and wrong.
  • Moral convictions can motivate people to be uncompromising, be hostile to those with different views, and share misinformation.
  • Moralised messages are not necessarily false.
  • Detecting moralised messages can help identify issues where falsehoods are more likely to be shared, to be believed, and to be consequential.
  • Moralised messages can be detected by a moral sentiment inference algorithm.
  • An application demonstrated that users who posted climate change contrarian content were more likely to post moralised messages about the Great Barrier Reef than other users.


 Link to slides: matt-lab.github.io/workshop_moral-misinformation

 Email: matthew.andreotta@csiro.au

References

Brady, W. J., Gantman, A. P., & Van Bavel, J. J. (2020). Attentional capture helps explain why moral and emotional content go viral. Journal of Experimental Psychology. General, 149(4), 746–756. https://doi.org/10.1037/xge0000673
Brady, W. J., Jackson, J. C., Lindström, B., & Crockett, M. (2023). Algorithm-Mediated Social Learning in Online Social Networks. OSF Preprints. https://doi.org/10.31219/osf.io/yw5ah
Brady, W. J., & Van Bavel, J. J. (2021). Estimating the effect size of moral contagion in online networks: A pre-registered replication and meta-analysis. OSF Preprints. https://doi.org/10.31219/osf.io/s4w2x
Carrasco-Farré, C. (2022). The fingerprints of misinformation: How deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions. Humanities and Social Sciences Communications, 9(1), 1–18. https://doi.org/10.1057/s41599-022-01174-9
Coan, T. G., Boussalis, C., Cook, J., & Nanko, M. O. (2021). Computer-assisted classification of contrarian claims about climate change. Scientific Reports, 11(1), 22320. https://doi.org/10.1038/s41598-021-01714-4
Cook, J., Nuccitelli, D., Green, S. A., Richardson, M., Winkler, B., Painting, R., … Skuce, A. (2013). Quantifying the Consensus on Anthropogenic Global Warming in the Scientific Literature. Environmental Research Letters, 8(2), 1–7. https://doi.org/10.1088/1748-9326/8/2/024024
CSIRO. (2023). Emergency Situation Awareness. https://esa.csiro.au/aus/about-public.html.
Curry, O. S., Mullins, D. A., & Whitehouse, H. (2019). Is It Good to Cooperate?: Testing the Theory of Morality-as-Cooperation in 60 Societies. Current Anthropology, 60(1), 47–69. https://doi.org/10.1086/701478
Demasi, M. A. (2022). Accountability in the Russo-Ukrainian war: Vladimir Putin versus NATO. Peace and Conflict: Journal of Peace Psychology. Advance online publication. https://doi.org/10.1037/pac0000653
Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., … Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y
Feinberg, M., & Willer, R. (2019). Moral reframing: A technique for effective and persuasive communication across political divides. Social and Personality Psychology Compass, 13(12), e12501. https://doi.org/10.1111/spc3.12501
Firth, J. (1957). A Synopsis of Linguistic Theory, 1930-1955. In Special Volume of the Philological Society. Studies in Linguistic Analysis (pp. 1–32). Oxford: Blackwell.
Garrett, K. N., & Bankert, A. (2020). The Moral Roots of Partisan Division: How Moral Conviction Heightens Affective Polarization. British Journal of Political Science, 50(2), 621–640. https://doi.org/10.1017/S000712341700059X
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046. https://doi.org/10.1037/a0015141
NewsGuard. (2023). Russia-Ukraine Disinformation Tracking Center.
Helgason, B. A., & Effron, D. A. (2022). It might become true: How prefactual thinking licenses dishonesty. Journal of Personality and Social Psychology, 123(5), 909–940. https://doi.org/10.1037/pspa0000308
Hoover, J., Johnson, K., Boghrati, R., Graham, J., & Dehghani, M. (2018). Moral Framing and Charitable Donation: Integrating Exploratory Social Media Analyses and Confirmatory Experimentation. Collabra: Psychology, 4(1), 9. https://doi.org/10.1525/collabra.129
Kennedy, B., Golazizian, P., Trager, J., Atari, M., Hoover, J., Mostafazadeh Davani, A., & Dehghani, M. (2023). The (moral) language of hate. PNAS Nexus, 2(7), pgad210. https://doi.org/10.1093/pnasnexus/pgad210
Kodapanakkal, R. I., Brandt, M. J., Kogler, C., & van Beest, I. (2022). Moral relevance varies due to inter-individual and intra-individual differences across big data technology domains. European Journal of Social Psychology, 52(1), 46–70. https://doi.org/10.1002/ejsp.2814
Konkes, C., & Foxwell-Norton, K. (2021). Science communication and mediatised environmental conflict: A cautionary tale. Public Understanding of Science, 30(4), 470–483. https://doi.org/10.1177/0963662520985134
Lamb, W. F., Mattioli, G., Levi, S., Roberts, J. T., Capstick, S., Creutzig, F., … Steinberger, J. K. (2020). Discourses of climate delay. Global Sustainability, 3, e17. https://doi.org/10.1017/sus.2020.13
Malle, B. F. (2021). Moral Judgments. Annual Review of Psychology, 72(1), 293–318. https://doi.org/10.1146/annurev-psych-072220-104358
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv. Retrieved from https://arxiv.org/abs/1301.3781
Power, R., Robinson, B., & Cameron, M. (2023). Insights from a decade of Twitter monitoring for emergency management. Proceedings of the Information Systems for Crisis Response and Management Asia Pacific Conference 2022.
Power, R., Robinson, B., Colton, J., & Cameron, M. (2014). Emergency Situation Awareness: Twitter case studies. Information Systems for Crisis Response and Management in Mediterranean Countries, 218–231. Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-11818-5_19
Ramezani, A., Zhu, Z., Rudzicz, F., & Xu, Y. (2021). An unsupervised framework for tracing textual sources of moral change. Findings of the Association for Computational Linguistics: EMNLP 2021, 1215–1228. https://doi.org/10.18653/v1/2021.findings-emnlp.105
Rozin, P. (1999). The Process of Moralization. Psychological Science, 10(3), 218–221. https://doi.org/10.1111/1467-9280.00139
Ryan, T. J. (2017). No Compromise: Political Consequences of Moralized Attitudes. American Journal of Political Science, 61(2), 409–423. https://doi.org/10.1111/ajps.12248
Skitka, L. J., Hanson, B. E., Morgan, G. S., & Wisneski, D. C. (2021). The Psychology of Moral Conviction. Annual Review of Psychology, 72(1), 347–366. https://doi.org/10.1146/annurev-psych-063020-030612
Solovev, K., & Pröllochs, N. (2022). Moralized language predicts hate speech on social media. PNAS Nexus, pgac281. https://doi.org/10.1093/pnasnexus/pgac281
Spring, V. L., Cameron, C. D., & Cikara, M. (2018). The Upside of Outrage. Trends in Cognitive Sciences, 22(12), 1067–1069. https://doi.org/10.1016/j.tics.2018.09.006
Većkalov, B., Geiger, S. J., White, M. P., Rutjens, B., Harreveld, F. van, Stablum, F., … Linden, D. S. van der. (2023). A 27-country test of communicating the scientific consensus on climate change. OSF Preprints. https://doi.org/10.31219/osf.io/bctm3
Warriner, A. B., Kuperman, V., & Brysbaert, M. (2013). Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior Research Methods, 45(4), 1191–1207. https://doi.org/10.3758/s13428-012-0314-x
Xie, J. Y., Ferreira Pinto Junior, R., Hirst, G., & Xu, Y. (2019). Text-based inference of moral sentiment change. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 4654–4663. Hong Kong, China: Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1472